A process is represented by its address space, which contains all the virtual-to-physical address mappings for code, data, and files, and by its execution context; the OS tracks this state in the process control block (PCB).
Thread:
If t_idle > 2 * t_ctx_switch, it is worth context switching to hide the idle time (e.g., if a context switch costs 10 us, switching pays off whenever the thread would otherwise idle for more than 20 us). Also, t_ctx_switch(thread) < t_ctx_switch(process), because a thread switch does not need to remap virtual-to-physical addresses, so threads can hide more latency.
Multithreaded OS kernel
Thread data structure: identify threads, keep track of resource usage
Synchronization Mechanism:
Mutual Exclusion
Waiting on other thread
Waking up other threads from a wait state
Thread type: the thread data structure, including thread ID, PC, SP, registers, stack, and attributes
Fork(proc, args): create a new thread (not the UNIX fork, which creates a new process that is an exact copy of the calling process)
T0 calls T1 = fork(proc, args). T1's thread data structure is initialized with PC = proc and a stack containing args. After the fork operation completes, the process has two threads: T0 executes the instruction after fork, while T1 starts executing proc. When T1 finishes, its result must be communicated back to T0 (see join).
Join(thread): wait for a thread to terminate
A parent thread calls join, child_result = join(t1); it blocks until the child completes. Join returns the result of the child's computation to the parent, and any data structures and resources allocated to the child are freed.
#include <list>
#include <mutex>
using namespace std;

mutex m;
list<int> my_list;

void safe_insert(int i) {
    lock_guard<mutex> guard(m); // lock m; automatically unlocked when guard goes out of scope
    my_list.push_back(i);
}
Condition variable
Wait(mutex, condition): atomically release the mutex and place the thread on the condition variable's wait queue; when woken, the thread is removed from the queue, reacquires the mutex, and exits the wait operation.
Multiple readers / 1 writer:
Resource counter: encodes the state of the shared resource (0 = free, n > 0 = n readers active, -1 = a writer active).
A mutex grants access to only one thread at a time. A more complicated sharing scenario like this can be implemented with a proxy variable (the resource counter), itself protected by a mutex.
Spurious wake-ups: we may wake threads up knowing they may not be able to proceed.
If we unlock only after the broadcast/signal, none of the woken threads can acquire the lock until we release it, so they wake up only to block again.
Can we always unlock the mutex before broadcast/signal? No: when the decision to signal depends on state that must be read while holding the mutex, the signal has to happen before the unlock.
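The readers/writers scheme above can be sketched with the resource counter as the proxy variable (a minimal sketch of the policy, assuming the counter encoding 0 = free, n > 0 = n readers, -1 = writer; function and variable names are illustrative):

```cpp
#include <condition_variable>
#include <mutex>

std::mutex counter_mutex;                 // protects only the proxy variable
std::condition_variable read_phase, write_phase;
int resource_counter = 0;                 // 0 = free, n > 0 = n readers, -1 = one writer

void reader_enter() {
    std::unique_lock<std::mutex> lk(counter_mutex);
    while (resource_counter == -1)        // wait while a writer is active
        read_phase.wait(lk);
    ++resource_counter;                   // one more reader in the read phase
}

void reader_exit() {
    std::lock_guard<std::mutex> lk(counter_mutex);
    if (--resource_counter == 0)
        write_phase.notify_one();         // last reader out: let one writer in
}

void writer_enter() {
    std::unique_lock<std::mutex> lk(counter_mutex);
    while (resource_counter != 0)         // wait until the resource is free
        write_phase.wait(lk);
    resource_counter = -1;                // writer owns the resource
}

void writer_exit() {
    std::lock_guard<std::mutex> lk(counter_mutex);
    resource_counter = 0;
    read_phase.notify_all();              // wake all waiting readers
    write_phase.notify_one();             // and one waiting writer
}
```

Note the mutex protects only the counter, not the shared resource itself; actual reads and writes happen outside the critical section, which is what allows many readers to proceed concurrently.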
Deadlock: two or more competing threads are waiting on each other to complete, but none of them ever does; each waits on the other.
A cycle in the wait graph (an edge from a thread waiting on a resource to the thread owning that resource) is necessary and sufficient for a deadlock to occur.
One-to-one model: each user-level thread has an associated kernel-level thread. When a user-level thread is created, either a kernel-level thread is created or an existing kernel-level thread is associated with it.
Pros: the OS understands threads in terms of synchronization, blocking, and scheduling; the application can benefit from the threading support in the OS.
Cons: every thread operation must go to the OS (system-call cost); the application is constrained by the OS's policies and thread limits; portability depends on the kernel's threading support.
Many-to-one model: all user-level threads are mapped to a single kernel-level thread. The application's user-level threading library decides which user-level thread is mapped to the kernel-level thread at any given point in time; user-level threads run only when the mapped kernel-level thread runs on the CPU.
Pros: totally portable; thread management happens at user level, avoiding system-call overhead and OS-imposed limits and policies.
Cons: the OS has no insight into the application's threads; if one user-level thread blocks in the kernel (e.g., on I/O), the entire process blocks.
Many-to-many model: some user-level threads are associated with one kernel-level thread, while others are associated with other kernel-level threads.
Pros: the best of both worlds; user-level threads can be bound to a particular kernel-level thread or left unbound.
Cons: requires coordination between the user-level and kernel-level thread managers.
At the kernel level (system scope): system-wide thread management by OS-level thread managers (e.g., the CPU scheduler).
At the user level (process scope): the user-level library manages threads within a single process only.
Multithreading patterns:
#### boss-workers
#### pipeline:
sequence of stages, each handling one step of the task
shared-buffer-based communication between stages, similar to producer/consumer
#### layered